-
Abstract: Recent work has pointed out the potential existence of a tight relation between the cosmological parameter Ω_m, at fixed Ω_b, and the properties of individual galaxies in state-of-the-art cosmological hydrodynamic simulations. In this paper, we investigate whether such a relation also holds for galaxies from simulations run with a different code that makes use of distinct subgrid physics: Astrid. We find that in this case as well, neural networks are able to infer the value of Ω_m with ∼10% precision from the properties of individual galaxies, while accounting for astrophysical uncertainties as modeled in Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS). This tight relationship is present at all considered redshifts, z ≤ 3, and the stellar mass, the stellar metallicity, and the maximum circular velocity are among the most important galaxy properties behind the relation. In order to use this method with real galaxies, one needs to quantify its robustness: the accuracy of the model when tested on galaxies generated by codes different from the one used for training. We quantify the robustness of the models by testing them on galaxies from four different codes: IllustrisTNG, SIMBA, Astrid, and Magneticum. We show that the models perform well on a large fraction of the galaxies, but fail dramatically on a small fraction of them. Removing these outliers significantly improves the accuracy of the models across simulation codes.
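As a rough illustration of the per-galaxy regression described in this abstract, the sketch below trains a small fully connected network to map a few galaxy properties (e.g. stellar mass, stellar metallicity, maximum circular velocity) to Ω_m. The array names, network sizes, and random stand-in data are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal sketch: regress Omega_m from per-galaxy properties with a small MLP.
# `galaxy_props` (N_gal x 3) and `omega_m` are random placeholders standing in
# for a real simulated-galaxy catalog and its true cosmology labels.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_gal = 10_000
galaxy_props = torch.randn(n_gal, 3)          # stand-in for catalog features
omega_m = 0.3 + 0.05 * torch.randn(n_gal, 1)  # stand-in for true Omega_m labels

model = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),                        # predicted Omega_m per galaxy
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(50):
    optimizer.zero_grad()
    pred = model(galaxy_props)
    loss = nn.functional.mse_loss(pred, omega_m)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```

In practice, work of this kind typically also has the network output an uncertainty for each prediction; the plain mean-squared-error objective above is only the simplest possible stand-in.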
-
Abstract: We discover analytic equations that can infer the value of Ω_m from the positions and velocity moduli of halo and galaxy catalogs. The equations are derived by combining a tailored graph neural network (GNN) architecture with symbolic regression. We first train the GNN on dark matter halos from Gadget N-body simulations to perform field-level likelihood-free inference, and show that our model can infer Ω_m with ∼6% accuracy from halo catalogs of thousands of N-body simulations run with six different codes: Abacus, CUBEP3M, Gadget, Enzo, PKDGrav3, and Ramses. By applying symbolic regression to the different parts comprising the GNN, we derive equations that can predict Ω_m from halo catalogs of simulations run with all of the above codes with accuracies similar to those of the GNN. We show that, by tuning a single free parameter, our equations can also infer the value of Ω_m from galaxy catalogs of thousands of state-of-the-art hydrodynamic simulations of the CAMELS project, each with a different astrophysics model, run with five distinct codes that employ different subgrid physics: IllustrisTNG, SIMBA, Astrid, Magneticum, SWIFT-EAGLE. Furthermore, the equations also perform well when tested on galaxy catalogs from simulations covering a vast region in parameter space that samples variations in 5 cosmological and 23 astrophysical parameters. We speculate that the equations may reflect the existence of a fundamental physics relation between the phase-space distribution of generic tracers and Ω_m, one that is not affected by galaxy formation physics down to scales as small as 10 h^−1 kpc.
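To make the second stage of this pipeline concrete, the sketch below applies symbolic regression to summary features and a target parameter using the open-source PySR library. The choice of library, the operator set, and the random stand-in features are assumptions for illustration; they are not the quantities or settings extracted from the trained GNN in the paper.

```python
# Minimal sketch: symbolic regression from catalog summary features to Omega_m.
# `features` and `omega_m` are random placeholders standing in for quantities
# derived from halo catalogs (or from parts of a trained GNN).
import numpy as np
from pysr import PySRRegressor  # https://github.com/MilesCranmer/PySR

rng = np.random.default_rng(0)
n_sims = 2000
features = rng.uniform(0.5, 2.0, size=(n_sims, 3))  # placeholder summaries
omega_m = 0.3 + 0.1 * rng.normal(size=n_sims)        # placeholder labels

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "log"],
)
model.fit(features, omega_m)   # searches for compact analytic expressions
print(model.equations_)        # table of candidate equations, by complexity
```

The appeal of this two-stage approach is that the final product is a short closed-form expression that can be inspected and reused without the neural network.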
-
Abstract: We train graph neural networks to perform field-level likelihood-free inference using galaxy catalogs from state-of-the-art hydrodynamic simulations of the CAMELS project. Our models are rotation, translation, and permutation invariant and do not impose any cut on scale. From galaxy catalogs that only contain the 3D positions and radial velocities of ∼1000 galaxies in tiny (25 h^−1 Mpc)^3 volumes, our models can infer the value of Ω_m with approximately 12% precision. More importantly, by testing the models on galaxy catalogs from thousands of hydrodynamic simulations, each having a different efficiency of supernova and active galactic nucleus feedback, run with five different codes and subgrid models (IllustrisTNG, SIMBA, Astrid, Magneticum, SWIFT-EAGLE), we find that our models are robust to changes in astrophysics, subgrid physics, and subhalo/galaxy finder. Furthermore, we test our models on 1024 simulations that cover a vast region in parameter space, with variations in five cosmological and 23 astrophysical parameters, finding that the model extrapolates well. Our results indicate that the key to building a robust model is the use of both galaxy positions and velocities, suggesting that the network has likely learned an underlying physical relation that does not depend on galaxy formation and is valid on scales larger than ∼10 h^−1 kpc.
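To illustrate what rotation-, translation-, and permutation-invariant inputs can look like, the sketch below builds a graph from galaxy positions with a linking radius and keeps only pairwise distances and velocity differences as edge features before a permutation-invariant sum. The linking radius, feature choices, and neglect of box periodicity are assumptions for illustration, not the exact architecture of the paper.

```python
# Minimal sketch: invariant edge features for a galaxy-catalog graph.
# Positions and radial velocities are random placeholders; `r_link` is assumed.
import torch

torch.manual_seed(0)
n_gal, box = 1000, 25.0                  # number of galaxies, box size in Mpc/h
pos = box * torch.rand(n_gal, 3)         # 3D positions (periodicity ignored here)
vrad = torch.randn(n_gal)                # radial velocities

r_link = 1.25                            # linking radius in Mpc/h (assumed)
d = torch.cdist(pos, pos)                # pairwise distances: translation/rotation invariant
src, dst = torch.nonzero((d < r_link) & (d > 0), as_tuple=True)

# Edge features depend only on invariant quantities.
edge_feat = torch.stack([d[src, dst], (vrad[src] - vrad[dst]).abs()], dim=1)

# Permutation-invariant aggregation: sum messages onto each receiving galaxy.
messages = torch.zeros(n_gal, edge_feat.shape[1])
messages.index_add_(0, dst, edge_feat)   # per-galaxy summaries, order-independent
graph_summary = messages.sum(dim=0)      # a global, permutation-invariant vector
print(graph_summary)
```

A full graph neural network would pass such edge features through learned message and update functions, but the invariances are already fixed by this choice of inputs and aggregation.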
-
Abstract: We train graph neural networks on halo catalogs from Gadget N-body simulations to perform field-level likelihood-free inference of cosmological parameters. The catalogs contain ≲5000 halos with masses ≳10^10 h^−1 M_⊙ in a periodic volume of (25 h^−1 Mpc)^3; every halo in the catalog is characterized by several properties such as position, mass, velocity, concentration, and maximum circular velocity. Our models, built to be permutationally, translationally, and rotationally invariant, do not impose a minimum scale on which to extract information and are able to infer the values of Ω_m and σ_8 with a mean relative error of ∼6% when using positions plus velocities and positions plus masses, respectively. More importantly, we find that our models are very robust: they can infer the value of Ω_m and σ_8 when tested using halo catalogs from thousands of N-body simulations run with five different N-body codes: Abacus, CUBEP3M, Enzo, PKDGrav3, and Ramses. Surprisingly, the model trained to infer Ω_m also works when tested on thousands of state-of-the-art CAMELS hydrodynamic simulations run with four different codes and subgrid physics implementations. Using halo properties such as concentration and maximum circular velocity allows our models to extract more information, at the expense of breaking their robustness. This may happen because the different N-body codes are not converged on the relevant scales corresponding to these properties.
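Field-level likelihood-free inference of this kind is often set up so that the network outputs both a point estimate and an uncertainty for each parameter. The loss below is one common "moment network" formulation, shown as a simplified stand-in under that assumption rather than as the exact objective used in the paper; the parameter values and batch are dummy data.

```python
# Minimal sketch: a loss that trains a network to predict the posterior mean
# and standard deviation of (Omega_m, sigma_8). All values are placeholders.
import torch

def moment_loss(mean_pred, std_pred, theta_true):
    """First term fits the mean; second term fits the scatter around it."""
    term_mean = ((theta_true - mean_pred) ** 2).mean()
    term_std = (((theta_true - mean_pred) ** 2 - std_pred ** 2) ** 2).mean()
    return term_mean + term_std

# Example usage with dummy predictions for a batch of 8 catalogs, 2 parameters.
theta_true = torch.tensor([[0.30, 0.80]]).repeat(8, 1)
mean_pred = theta_true + 0.02 * torch.randn(8, 2)
std_pred = 0.02 * torch.ones(8, 2)
print(moment_loss(mean_pred, std_pred, theta_true))
```

The predicted standard deviation is what allows a statement such as a ∼6% mean relative error to be accompanied by a per-catalog uncertainty estimate.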
